Distributed Extreme Learning Machine for Nonlinear Learning over Network
Authors
Abstract
Distributed data collection and analysis over a network are ubiquitous, especially over a wireless sensor network (WSN). To our knowledge, the data model used in most distributed algorithms is linear. However, in real applications, the linearity of systems is not always guaranteed. In nonlinear cases, the single hidden layer feedforward neural network (SLFN) with radial basis function (RBF) ...
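For orientation, the standard SLFN/ELM formulation that this line of work builds on can be written as follows; this is the textbook notation, not necessarily the notation used in the paper itself.

\[
f(\mathbf{x}_j) \;=\; \sum_{i=1}^{L} \boldsymbol{\beta}_i \, g(\mathbf{a}_i, b_i, \mathbf{x}_j) \;=\; \mathbf{t}_j, \qquad j = 1, \dots, N,
\]

or, in matrix form, \( \mathbf{H}\boldsymbol{\beta} = \mathbf{T} \) with \( H_{ji} = g(\mathbf{a}_i, b_i, \mathbf{x}_j) \). ELM draws the hidden-node parameters \( (\mathbf{a}_i, b_i) \) at random and solves the output weights in a single linear step, \( \boldsymbol{\beta} = \mathbf{H}^{\dagger}\mathbf{T} \), where \( \mathbf{H}^{\dagger} \) is the Moore-Penrose pseudoinverse of \( \mathbf{H} \). For RBF hidden nodes, \( g(\mathbf{a}_i, b_i, \mathbf{x}) = \exp(-b_i \lVert \mathbf{x} - \mathbf{a}_i \rVert^2) \).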
Similar Resources
Dynamic Privacy For Distributed Machine Learning Over Network
Privacy-preserving distributed machine learning is becoming increasingly important due to the recent rapid growth of data. This paper focuses on a class of regularized empirical risk minimization (ERM) machine learning problems and develops two methods to provide differential privacy to distributed learning algorithms over a network. We first decentralize the learning algorithm using the alternati...
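The snippet is cut off before the two methods are described. Purely as an illustration of the general idea (not the paper's algorithms), the sketch below applies output perturbation to an L2-regularized ERM problem: solve the problem, then release the minimizer with Gaussian noise calibrated to its sensitivity. All names and constants here are standard textbook choices, assumed rather than taken from the paper.

import numpy as np

def private_logreg(X, y, lam=0.1, eps=1.0, delta=1e-5, steps=500, lr=0.5, seed=0):
    # Output perturbation for L2-regularized logistic regression.
    # Assumes ||x_i||_2 <= 1 and y_i in {-1, +1}; the logistic loss is then
    # 1-Lipschitz in w, so the minimizer's L2 sensitivity is <= 2 / (n * lam).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):  # plain gradient descent on the regularized loss
        margins = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + lam * w
        w -= lr * grad
    sensitivity = 2.0 / (n * lam)  # worst-case change from altering one record
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps  # Gaussian mechanism
    return w + rng.normal(0.0, sigma, size=d)  # (eps, delta)-differentially private weights

Because the noise scale is proportional to 1/(n * lam), the privacy cost of releasing the model shrinks as the dataset grows.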
Extreme Learning Machine
The slow speed of feedforward neural networks has hampered their growth for decades. Unlike traditional algorithms, the extreme learning machine (ELM) [5][6] for single hidden layer feedforward networks (SLFNs) chooses the input weights and hidden biases randomly and determines the output weights through linear algebraic manipulations. We propose ELM as an auto-associative neural network (AANN) and i...
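As a minimal, self-contained sketch of the training recipe just described (random input weights and biases, output weights by a single least-squares solve); the function names and the tanh activation are illustrative choices, not taken from the cited work.

import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Basic ELM fit for a single-hidden-layer feedforward network:
    draw input weights and biases at random, then solve the output
    weights by linear least squares (Moore-Penrose pseudoinverse)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = rng.normal(size=(d, n_hidden))   # random input weights, never retrained
    b = rng.normal(size=n_hidden)        # random hidden biases
    H = np.tanh(X @ A + b)               # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T         # output weights: beta = H^+ T
    return A, b, beta

def elm_predict(X, A, b, beta):
    return np.tanh(X @ A + b) @ beta

# Tiny usage example: fit y = sin(x) on synthetic data.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
A, b, beta = elm_train(X, T, n_hidden=30)
print(np.mean((elm_predict(X, A, b, beta) - T) ** 2))  # small training MSE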
Optimizing Network Performance in Distributed Machine Learning
To cope with the ever-growing availability of training data, there have been several proposals to scale machine learning computation beyond a single server and distribute it across a cluster. While this reduces training time, the observed speedup is often limited by network bottlenecks. To address this, we design MLNET, a host-based communication layer that aims to improve the net...
Journal
Journal title: Entropy
Year: 2015
ISSN: 1099-4300
DOI: 10.3390/e17020818